The Grammar of Interactive Explanatory Model Analysis

Preprint

Baniecki, H., and Biecek, P. The Grammar of Interactive Explanatory Model Analysis. arXiv:2005.00497, 2021.

Available at https://arxiv.org/abs/2005.00497

Abstract

We cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable machine learning focus on a single aspect of model behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper presents how different Explanatory Model Analysis (EMA) methods complement each other and why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to allow for an unrestricted human-model dialogue. IEMA is implemented in a human-centered framework that adopts interactivity, customizability and automation as its main traits. Combined, these methods enhance the responsible machine learning approach.

[Figure: modelStudio.gif, an animated IEMA dashboard for explaining a black-box model]

Created using modelStudio: https://github.com/ModelOriented/modelStudio

Citation

To cite this work, use the following BibTeX entry:

@article{baniecki-iema,
  title={{The Grammar of Interactive Explanatory Model Analysis}}, 
  author={Hubert Baniecki and Przemyslaw Biecek},
  journal={arXiv:2005.00497},
  year={2021},
  url={https://arxiv.org/abs/2005.00497}
}